Dynamic Smooth Compressed Quadtrees
We introduce dynamic smooth (a.k.a. balanced) compressed quadtrees with worst-case constant-time updates in constant dimension. We distinguish two versions of the problem. First, we show that quadtrees as a space-division data structure can be made smooth and dynamic subject to split and merge operations on the quadtree cells. Second, we show that quadtrees used to store a set of points in R^d can be made smooth and dynamic subject to insertions and deletions of points. The second version uses the first, but must additionally deal with compression and alignment of quadtree components. In both cases our updates take 2^{O(d log d)} time, except for the point location part in the second version, which has a lower bound of Omega(log n); but if a pointer (finger) to the correct quadtree cell is given, the rest of the updates take worst-case constant time. Our result implies that several classic and recent results (ranging from ray tracing to planar point location) in computational geometry which use quadtrees can deal with arbitrary point sets on a real RAM pointer machine.
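As an illustrative sketch (not the paper's data structure), the split operation and the smoothness invariant can be expressed on a toy uncompressed quadtree whose cells are addressed by integer triples (depth, x, y); the brute-force adjacency check is for illustration only:

```python
def split(cell):
    """Split a cell (depth, x, y) into its four quadtree children."""
    d, x, y = cell
    return [(d + 1, 2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)]

def is_smooth(leaves):
    """Check the smoothness (balance) invariant: any two edge-adjacent leaf
    cells differ by at most one level. Brute force, illustration only."""
    def bounds(c):
        d, x, y = c
        s = 1.0 / (1 << d)  # side length of a depth-d cell in the unit square
        return (x * s, y * s, (x + 1) * s, (y + 1) * s)

    def adjacent(a, b):
        ax0, ay0, ax1, ay1 = bounds(a)
        bx0, by0, bx1, by1 = bounds(b)
        touch_x = ax1 == bx0 or bx1 == ax0
        touch_y = ay1 == by0 or by1 == ay0
        overlap_x = ax0 < bx1 and bx0 < ax1
        overlap_y = ay0 < by1 and by0 < ay1
        return (touch_x and overlap_y) or (touch_y and overlap_x)

    return all(abs(a[0] - b[0]) <= 1
               for a in leaves for b in leaves
               if a != b and adjacent(a, b))
```

A split that creates a depth-3 leaf next to a depth-1 leaf violates the invariant; the smooth versions in the paper restore it with extra splits in worst-case constant time.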
Topological Stability of Kinetic k-Centers
We study the k-center problem in a kinetic setting: given a set P of
continuously moving points in the plane, determine a set of k (moving)
disks that cover P at every time step, such that the disks are as small as
possible at any point in time. Whereas the optimal solution over time may
exhibit discontinuous changes, many practical applications require the solution
to be stable: the disks must move smoothly over time. Existing results on this
problem require the disks to move with a bounded speed, but this model is very
hard to work with. Hence, the results are limited and offer little theoretical
insight. Instead, we study the topological stability of k-centers.
Topological stability was recently introduced and simply requires the solution
to change continuously, but may do so arbitrarily fast. We prove upper and
lower bounds on the ratio between the radii of an optimal but unstable solution
and the radii of a topologically stable solution---the topological stability
ratio---considering various metrics and various optimization criteria. For k = 2
we provide tight bounds, and for small k > 2 we can obtain nontrivial
lower and upper bounds. Finally, we provide an algorithm to compute the
topological stability ratio in polynomial time for constant k.
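A toy illustration of the discontinuity mentioned above (not from the paper): brute-forcing the optimal 2-center of three collinear points shows the optimal centers jumping as the middle point crosses the midpoint, even though the optimal radius varies continuously:

```python
from itertools import combinations
from math import dist

def two_center(points):
    """Brute-force optimal 2-center for tiny point sets where each cluster
    has at most two points (illustration only; general minimum enclosing
    circles would be needed for larger clusters)."""
    best = None
    pts = list(points)
    n = len(pts)
    for r in range(1, n):
        for group in combinations(range(n), r):
            a = [pts[i] for i in group]
            b = [pts[i] for i in range(n) if i not in group]
            if max(len(a), len(b)) > 2:
                continue
            def circ(c):
                # Smallest disk for one point (radius 0) or two (diametral).
                if len(c) == 1:
                    return c[0], 0.0
                (x1, y1), (x2, y2) = c
                return ((x1 + x2) / 2, (y1 + y2) / 2), dist(c[0], c[1]) / 2
            (ca, ra), (cb, rb) = circ(a), circ(b)
            radius = max(ra, rb)
            if best is None or radius < best[0]:
                best = (radius, sorted([ca, cb]))
    return best
```

With points (0,0), (4,0) and a third point moving from x = 1.9 to x = 2.1, the optimal radius stays 0.95 but the left center jumps from (0.95, 0) to (0, 0): an unstable optimum that a topologically stable solution must avoid.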
Trajectory Visibility
We study the problem of testing whether there exists a time at which two entities moving along different piece-wise linear trajectories among polygonal obstacles are mutually visible. We study several variants, depending on whether or not the obstacles form a simple polygon, trajectories may intersect the polygon edges, and both or only one of the entities are moving. For constant-complexity trajectories contained in a simple polygon with n vertices, we provide an O(n) time algorithm to test if there is a time at which the entities can see each other. If the polygon contains holes, we present an O(n log n) algorithm. We show that this is tight. We then consider storing the obstacles in a data structure, such that queries consisting of two line segments can be efficiently answered. We show that for all variants it is possible to answer queries in sublinear time using polynomial space and preprocessing time. As a critical intermediate step, we provide an efficient solution to a problem of independent interest: preprocess a convex polygon such that we can efficiently test intersection with a quadratic curve segment. If the obstacles form a simple polygon, this allows us to answer visibility queries in O(n^{3/4} log^3 n) time using O(n log^5 n) space. For more general obstacles, the query time is O(log^k n), for a constant but large value of k, using O(n^{3k}) space. We provide more efficient solutions when one of the entities remains stationary.
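The basic visibility primitive underlying such queries can be sketched as a brute-force O(n) test against the obstacle edges (illustration only; it ignores degenerate collinear cases that a robust implementation must handle):

```python
def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, a, b):
    """Proper intersection test between segments pq and ab."""
    d1, d2 = orient(p, q, a), orient(p, q, b)
    d3, d4 = orient(a, b, p), orient(a, b, q)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, edges):
    """O(n) visibility check: p sees q iff segment pq crosses no obstacle edge."""
    return not any(segments_cross(p, q, a, b) for a, b in edges)
```

Sweeping such a test over time intervals of the moving entities is the naive baseline that the paper's algorithms and data structures improve upon.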
Smoothing the gap between NP and ER
We study algorithmic problems that belong to the complexity class of the
existential theory of the reals (ER). A problem is ER-complete if it is as hard
as the problem ETR and if it can be written as an ETR formula. Traditionally,
these problems are studied in the real RAM, a model of computation that assumes
that the storage and comparison of real-valued numbers can be done in constant
space and time, with infinite precision. The complexity class ER is often
called a real RAM analogue of NP, since the problem ETR can be viewed as the
real-valued variant of SAT.
In this paper we prove a real RAM analogue to the Cook-Levin theorem which
shows that ER membership is equivalent to having a verification algorithm that
runs in polynomial-time on a real RAM. This gives an easy proof of
ER-membership, as verification algorithms on a real RAM are much more versatile
than ETR-formulas. We use this result to construct a framework to study
ER-complete problems under smoothed analysis. We show that for a wide class of
ER-complete problems, their witnesses can be represented with logarithmic
input-precision by using smoothed analysis on their real RAM verification
algorithms. This shows in a formal way that the boundary between NP and ER
(formed by inputs whose solution witness needs high input-precision) consists
of contrived input. We apply our framework to well-studied ER-complete
recognition problems which have the exponential bit phenomenon such as the
recognition of realizable order types or the Steinitz problem in fixed
dimension. Comment: 31 pages, 11 figures, FOCS 2020, SICOMP 202
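To illustrate the verification viewpoint on one of the problems mentioned, here is a sketch of a polynomial-time real RAM-style verifier for order-type realizability: given candidate real coordinates (the witness) and a prescribed orientation for each point triple, it checks all the signs. Encoding the order type as a plain dict of triple orientations is a simplification for illustration, not the paper's formalism:

```python
def orientation(p, q, r):
    """Sign of the determinant |q-p, r-p|: +1 ccw, -1 cw, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def verify_order_type(points, chirotope):
    """Accept iff the candidate coordinates `points` (the real-valued
    witness) realize the prescribed orientations, given as a mapping
    {(i, j, k): sign}. Runs in time polynomial in the input size."""
    return all(orientation(points[i], points[j], points[k]) == s
               for (i, j, k), s in chirotope.items())
```

The hardness of the recognition problem lies in whether such a witness exists at all, and realizable order types may require coordinates with exponentially many bits; the paper's smoothed-analysis framework shows that logarithmic input-precision suffices after small perturbations.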
Dynamic Dynamic Time Warping
The Dynamic Time Warping (DTW) distance is a popular similarity measure for
polygonal curves (i.e., sequences of points). It finds many theoretical and
practical applications, especially for temporal data, and is known to be a
robust, outlier-insensitive alternative to the Fréchet distance. For static
curves of at most n points, the DTW distance can be computed in O(n^2) time
in constant dimension. This tightly matches a SETH-based lower bound, even for
curves in R^1.
In this work, we study \emph{dynamic} algorithms for the DTW distance. Here,
the goal is to design a data structure that can be efficiently updated to
accommodate local changes to one or both curves, such as inserting or deleting
vertices, and that, after each operation, reports the updated DTW distance. We
give such a data structure with update and query time O(n^{1.5} log n), where n
is the maximum length of the curves.
As our main result, we prove that our data structure is conditionally
\emph{optimal}, up to subpolynomial factors. More precisely, we prove that,
already for curves in R^1, there is no dynamic algorithm to maintain
the DTW distance with update and query time O(n^{1.5 - delta}) for
any constant delta > 0, unless the Negative-k-Clique Hypothesis fails. In
fact, we give matching upper and lower bounds for various trade-offs between
update and query time, even in cases where the lengths of the curves differ. Comment: To appear at SODA2
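For reference, the static quadratic-time baseline is the textbook dynamic program (in the common sum-of-distances formulation); the paper's dynamic data structure avoids recomputing this table from scratch after every update:

```python
from math import inf, dist

def dtw(P, Q):
    """Textbook O(n*m) dynamic program for the DTW distance between two
    point sequences P and Q (sum-of-distances formulation)."""
    n, m = len(P), len(Q)
    # D[i][j] = DTW distance between the first i points of P and first j of Q.
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = dist(P[i - 1], Q[j - 1])
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Inserting or deleting a single vertex changes only one row or column of the input, which is what makes sublinear-in-n^2 update times plausible in the first place.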
Adaptive Out-Orientations with Applications
We give simple algorithms for maintaining edge-orientations of a
fully-dynamic graph, such that the out-degree of each vertex is bounded. On one
hand, we show how to orient the edges such that the out-degree of each vertex
is proportional to the arboricity alpha of the graph, in a worst-case update
time of O(log^2 n log alpha). On the other hand, motivated by applications
in dynamic maximal matching, we obtain a different trade-off, namely the
improved worst-case update time of O(log n log alpha) for the problem of
maintaining an edge-orientation with at most O(alpha + log n) out-edges per
vertex. Since our algorithms have update times with worst-case guarantees, the
number of changes to the solution (i.e. the recourse) is naturally limited.
Our algorithms make choices based entirely on local information, which makes
them automatically adaptive to the current arboricity of the graph. In other
words, they are arboricity-oblivious, yet arboricity-sensitive. This
both simplifies and improves upon previous work, by having fewer assumptions or
better asymptotic guarantees.
As a consequence, one obtains an algorithm with improved efficiency for
maintaining a (1 + epsilon)-approximation of the maximum subgraph density,
and an algorithm for dynamic maximal matching whose worst-case update time is
guaranteed to be upper bounded by O(alpha + log^2 n log alpha), where alpha
is the arboricity at the time of the update.
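A minimal sketch of the local flavor of such algorithms (not the paper's algorithm, which uses more careful rebalancing to get worst-case guarantees): orient each inserted edge out of the endpoint with the smaller current out-degree, consulting only the two endpoints:

```python
from collections import defaultdict

def insert_edge(out, u, v):
    """Local rule sketch: orient the new edge out of the endpoint with the
    smaller current out-degree. `out` maps each vertex to its out-neighbors."""
    if len(out[u]) <= len(out[v]):
        out[u].add(v)
    else:
        out[v].add(u)

def delete_edge(out, u, v):
    """Remove the edge whichever way it is currently oriented."""
    out[u].discard(v)
    out[v].discard(u)
```

On a star (arboricity 1), this rule keeps every out-degree at 1; because each decision reads only local out-degrees, the rule adapts automatically as the graph's arboricity changes, which is the intuition behind arboricity-oblivious yet arboricity-sensitive algorithms.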
Simple and Robust Dynamic Two-Dimensional Convex Hull
The convex hull of a data set P is the smallest convex set that contains P.
In this work, we present a new data structure for convex hull, that allows
for efficient dynamic updates. In a dynamic convex hull implementation, the
following traits are desirable: (1) algorithms for efficiently answering
queries as to whether a specified point is inside or outside the hull, (2)
adhering to geometric robustness, and (3) algorithmic simplicity. Furthermore, a
specific but well-motivated type of two-dimensional data is rank-based data.
Here, the input is a set X of real-valued numbers, where the rank of any number
x in X is its index in X's sorted order. Each value x in X can be mapped
to the point (rank(x), x) to obtain a two-dimensional point set. In this work,
we give an efficient, geometrically robust, dynamic convex hull algorithm that
facilitates queries as to whether a point is internal. Furthermore, our
construction can be used to efficiently update the convex hull of rank-ordered
data, when the real-valued point set is subject to insertions and deletions.
Our improved solution is based on an algorithmic simplification of the
classical convex hull data structure by Overmars and van Leeuwen~[STOC'80],
combined with new algorithmic insights. Our theoretical guarantees on the
update time match those of Overmars and van Leeuwen, namely O(log^2 n),
while we allow a wider range of functionalities (including rank-based data).
Our algorithmic simplification includes simplifying an 11-case check down to a
3-case check that can be written in 20 lines of easily readable C-code. We
extend our solution to provide a trade-off between theoretical guarantees and
the practical performance of our algorithm. We test and compare our solutions
extensively on inputs that were generated randomly or adversarially, including
benchmarking datasets from the literature. Comment: Accepted for ALENEX2
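For contrast with the dynamic structure, the static hull construction and the point-internality query of trait (1) can be sketched with the standard monotone chain algorithm (a textbook method, not the paper's data structure):

```python
def cross(o, a, b):
    """Cross product (a-o) x (b-o); positive iff o->a->b is a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside(hull, q):
    """True iff q lies in the ccw convex polygon `hull` (boundary included)."""
    return all(cross(hull[i], hull[(i + 1) % len(hull)], q) >= 0
               for i in range(len(hull)))
```

The dynamic structure in the paper maintains the same upper/lower-chain decomposition under insertions and deletions instead of rebuilding it from scratch.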